Despite phenomenal recent progress, state-of-the-art music separation systems produce source estimates with significant perceptual flaws, such as adding extraneous noise or removing harmonics. We propose a post-processing model (the Make it Sound Good (MSG) post-processor) to enhance the output of music source separation systems. We apply our post-processing model to state-of-the-art waveform-based and spectrogram-based music source separators, including a separator unseen during training. Our analysis of the errors produced by source separators shows that waveform models tend to introduce more high-frequency noise, while spectrogram models tend to lose transients and high-frequency content. We introduce objective measures to quantify both kinds of errors and show that MSG improves the source reconstruction with respect to both. Crowdsourced subjective evaluations demonstrate that human listeners prefer source estimates of bass and drums that have been post-processed by MSG.
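The two error types described above can be made concrete with a simple spectral-energy comparison. The following is a minimal sketch, not the paper's actual objective measures (whose exact definitions are not reproduced here): it compares high-frequency energy in a reference source and a separator's estimate, so that surplus energy suggests added high-frequency noise and a deficit suggests lost high-frequency content. The cutoff frequency and the plain magnitude STFT are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

def high_frequency_energy(audio, sr, cutoff_hz=8000, n_fft=2048):
    """Total magnitude-spectrogram energy above a cutoff frequency."""
    freqs, _, spec = stft(audio, fs=sr, nperseg=n_fft)
    band = freqs >= cutoff_hz
    return float(np.sum(np.abs(spec[band]) ** 2))

def high_frequency_error(reference, estimate, sr):
    """Positive: estimate has surplus high-frequency energy (added noise).
    Negative: estimate is missing high-frequency energy (lost content)."""
    return high_frequency_energy(estimate, sr) - high_frequency_energy(reference, sr)

# Toy usage with random signals standing in for a reference stem and an estimate.
sr = 44100
reference = np.random.randn(sr)                      # one second of "audio"
estimate = reference + 0.05 * np.random.randn(sr)
print(high_frequency_error(reference, estimate, sr))
```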
The ability to compare the semantic similarity between text corpora is important in a variety of natural language processing applications. However, standard methods for evaluating corpus-level similarity metrics have yet to be established. We propose a set of automatic and interpretable measures for assessing the characteristics of corpus-level semantic similarity metrics, allowing sensible comparison of their behavior. We demonstrate the effectiveness of our evaluation measures in capturing fundamental characteristics by evaluating them on a collection of classical and state-of-the-art metrics. Our measures reveal that recently-developed metrics are becoming better at identifying semantic distributional mismatch, while classical metrics are more sensitive to perturbations at the surface text level.
The field of emergent communication aims to understand the characteristics of communication as it emerges from artificial agents solving tasks that require information exchange. Communication with discrete messages is considered a desired characteristic, for both scientific and applied reasons. However, training a multi-agent system with discrete communication is not straightforward, requiring either reinforcement learning algorithms or relaxing the discreteness requirement via a continuous approximation such as the Gumbel-softmax. Both these solutions result in poor performance compared to fully continuous communication. In this work, we propose an alternative approach to achieve discrete communication -- quantization of communicated messages. Using message quantization allows us to train the model end-to-end, achieving superior performance in multiple setups. Moreover, quantization is a natural framework that runs the gamut from continuous to discrete communication. Thus, it sets the ground for a broader view of multi-agent communication in the deep learning era.
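A common way to quantize messages while keeping end-to-end differentiability is a straight-through estimator: round the continuous message in the forward pass and let the gradient pass through unchanged in the backward pass. The sketch below illustrates this general idea in PyTorch; the number of quantization levels and the toy sender output are placeholders, not the paper's configuration.

```python
import torch

def quantize_message(msg, levels=16):
    """Map a continuous message onto `levels` discrete values in [0, 1],
    using a straight-through estimator so gradients still reach the sender."""
    msg = torch.sigmoid(msg)                       # squash into [0, 1]
    hard = torch.round(msg * (levels - 1)) / (levels - 1)
    return msg + (hard - msg).detach()             # forward: hard, backward: soft

# Toy sender output and a round trip through quantization.
sender_out = torch.randn(4, 8, requires_grad=True)   # batch of 8-symbol messages
discrete_msg = quantize_message(sender_out)
loss = ((discrete_msg - 0.5) ** 2).mean()
loss.backward()
print(discrete_msg, sender_out.grad.shape)
```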
We study the problem of high-dimensional sparse linear regression in a distributed setting under both computational and communication constraints. Specifically, we consider a star-topology network that connects several machines to a fusion center, with which they can exchange relatively short messages. Each machine holds noisy samples from a linear regression model with the same unknown sparse $d$-dimensional vector $\theta$. The goal of the fusion center is to estimate the vector $\theta$ and its support using little computation and limited communication at each machine. In this work, we consider distributed algorithms based on Orthogonal Matching Pursuit (OMP) and theoretically study their ability to exactly recover the support of $\theta$. We prove that under certain conditions, even when individual machines are unable to detect the support of $\theta$, distributed-OMP methods correctly recover it with total communication sublinear in $d$. In addition, we present simulations illustrating the performance of distributed OMP-based algorithms; they perform similarly to more sophisticated and computationally intensive methods, and in some cases even outperform them.
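A minimal sketch of one plausible distributed-OMP scheme follows (a hypothetical illustration, not necessarily the exact algorithm analyzed in the paper): each machine runs standard OMP on its local data and sends only its selected support indices to the fusion center, which keeps the indices chosen by the most machines.

```python
import numpy as np

def omp(X, y, k):
    """Standard Orthogonal Matching Pursuit: greedily select k columns of X."""
    residual, support = y.copy(), []
    for _ in range(k):
        correlations = np.abs(X.T @ residual)
        correlations[support] = -np.inf            # do not reselect an index
        support.append(int(np.argmax(correlations)))
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef
    return set(support)

def fusion_center(local_supports, k):
    """Keep the k indices selected by the largest number of machines."""
    votes = {}
    for s in local_supports:
        for idx in s:
            votes[idx] = votes.get(idx, 0) + 1
    return set(sorted(votes, key=votes.get, reverse=True)[:k])

# Toy example: d-dimensional k-sparse theta, m machines with n samples each.
rng = np.random.default_rng(0)
d, k, m, n = 200, 3, 10, 50
theta = np.zeros(d); theta[rng.choice(d, k, replace=False)] = 1.0
supports = []
for _ in range(m):
    X = rng.standard_normal((n, d))
    y = X @ theta + 0.5 * rng.standard_normal(n)
    supports.append(omp(X, y, k))
print(fusion_center(supports, k), set(np.flatnonzero(theta)))
```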
There is mounting empirical evidence regarding the capabilities of deep learning methods as we scale up datasets, model sizes, and training times. While there are some accounts of how these resources modulate statistical capacity, far less is known about their effect on the computational problem of model training. This work explores that question through the lens of learning a $k$-sparse parity of $n$ bits, a canonical problem that poses theoretical computational barriers. In this setting, we find that neural networks exhibit surprising phase transitions when scaling up dataset size and running time. In particular, we demonstrate empirically that with standard training, a variety of architectures learn sparse parities with $n^{O(k)}$ examples, with loss (and error) curves abruptly dropping after $n^{O(k)}$ iterations. These positive results nearly match known SQ lower bounds, even without an explicit sparsity prior. We elucidate the mechanism behind these phenomena with a theoretical analysis: we find that the phase transition in performance is not due to SGD "stumbling in the dark" until it finds the hidden set of features (a natural algorithm that also runs in $n^{O(k)}$ time); instead, we show that SGD gradually amplifies a Fourier gap in the population gradient.
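For concreteness, the $k$-sparse parity task can be stated in a few lines: inputs are uniform $n$-bit strings, and the label is the parity (XOR) of a fixed, hidden subset of $k$ coordinates. The sketch below generates such a dataset; the specific values of $n$, $k$, and the sample count are arbitrary choices for illustration, not the paper's experimental setup.

```python
import numpy as np

def sparse_parity_dataset(num_samples, n, k, seed=0):
    """Uniform n-bit inputs; the label is the XOR of k hidden coordinates."""
    rng = np.random.default_rng(seed)
    hidden = rng.choice(n, size=k, replace=False)      # the unknown support
    X = rng.integers(0, 2, size=(num_samples, n))
    y = X[:, hidden].sum(axis=1) % 2
    return X, y, hidden

X, y, hidden = sparse_parity_dataset(num_samples=10_000, n=50, k=3)
print(hidden, X.shape, y[:10])
```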
Text generation models have become the focus of much research, in particular the generation of sentence corpora. However, understanding the properties of automatically generated text corpora remains challenging. We propose a set of tools for examining the properties of generated text corpora. Applying these tools to a variety of generated corpora allows us to gain new insights into the properties of the generative models. As part of our characterization process, we found significant differences between the corpora produced by two leading generation techniques.
In machine learning, we traditionally evaluate the performance of a single model, averaged over a collection of test inputs. In this work, we propose a new approach: measuring the performance of a collection of models when evaluated at a $\textit{single input point}$. Specifically, we study a point's $\textit{profile}$: the relationship between models' average performance on the test distribution and their pointwise performance on this individual point. We find that profiles can yield new insights into the structure of both the distribution and the models, in- and out-of-distribution. For example, we empirically show that real data distributions consist of points with qualitatively different profiles. On one hand, there are "compatible" points with a strong correlation between pointwise and average performance. On the other hand, some points have a weak or even $\textit{negative}$ correlation: cases where improving overall model accuracy actually $\textit{hurts}$ performance on these inputs. We show that these experimental observations are inconsistent with the predictions of several simplified models of learning proposed in prior work. As an application, we use profiles to construct a dataset we call CIFAR-10-NEG: a subset of CINIC-10 such that, for standard models, accuracy on CIFAR-10-NEG is $\textit{negatively correlated}$ with accuracy on the CIFAR-10 test set. This illustrates, for the first time, an OOD dataset that completely inverts "accuracy on the line" (Miller, Taori, Raghunathan, Sagawa, Koh, Shankar, Liang, Carmon, and Schmidt 2021).
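A minimal sketch of what computing a point's profile might look like, assuming one already has a collection of trained models and their per-example correctness on a test set (the data layout and names here are hypothetical): for each test point, correlate the models' overall test accuracy with their correctness on that point, so that negative values flag the "incompatible" points described above.

```python
import numpy as np

def pointwise_profiles(correct):
    """correct: boolean array of shape (num_models, num_points), where
    correct[m, i] says whether model m classifies point i correctly.
    Returns, per point, the correlation between a model's overall test
    accuracy and its correctness on that point."""
    overall = correct.mean(axis=1)                 # per-model test accuracy
    profiles = np.empty(correct.shape[1])
    for i in range(correct.shape[1]):
        pointwise = correct[:, i].astype(float)
        if pointwise.std() == 0:                   # trivially easy or hard point
            profiles[i] = 0.0
        else:
            profiles[i] = np.corrcoef(overall, pointwise)[0, 1]
    return profiles

# Toy ensemble: 100 models, 500 test points.
rng = np.random.default_rng(0)
correct = rng.random((100, 500)) < 0.8
profiles = pointwise_profiles(correct)
print(profiles.min(), profiles.mean(), profiles.max())
```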
Modeling the distribution of high-dimensional data via a latent tree graphical model is a prevalent approach in multiple scientific domains. A common task is to infer the underlying tree structure given only observations of its terminal nodes. Many algorithms for tree recovery are computationally intensive, which limits their applicability to trees of moderate size. For large trees, a common approach, termed divide-and-conquer, recovers the tree structure in two steps. First, the structure is recovered separately for multiple, possibly random, subsets of the terminal nodes. Second, the resulting subtrees are merged to form a full tree. Here, we develop Spectral Top-Down Recovery (STDR), a deterministic divide-and-conquer approach to infer large latent tree models. Unlike previous methods, STDR partitions the terminal nodes in a non-random way, based on the Fiedler vector of a suitable Laplacian matrix related to the observed nodes. We prove that under certain conditions this partitioning is consistent with the tree structure. This, in turn, leads to a significantly simpler procedure for merging the small subtrees. We prove that STDR is statistically consistent and bound the number of samples required to accurately recover the tree with high probability. Using simulated data from several common tree models in phylogenetics, we demonstrate that STDR has a significant advantage in runtime, with improved or similar accuracy.
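The core partitioning step can be illustrated with standard spectral-graph machinery: build a similarity matrix over the terminal nodes, form its graph Laplacian, and split the nodes by the sign of the Fiedler vector (the eigenvector of the second-smallest eigenvalue). The sketch below uses a generic unnormalized Laplacian on a toy similarity matrix; the paper's specific Laplacian construction from the observed nodes is not reproduced here.

```python
import numpy as np

def fiedler_partition(similarity):
    """Split nodes into two groups by the sign of the Fiedler vector of the
    unnormalized graph Laplacian L = D - W built from a similarity matrix W."""
    W = np.asarray(similarity, dtype=float)
    L = np.diag(W.sum(axis=1)) - W
    eigvals, eigvecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]                        # second-smallest eigenvalue
    return np.flatnonzero(fiedler >= 0), np.flatnonzero(fiedler < 0)

# Toy similarity with two loosely connected blocks of terminal nodes.
rng = np.random.default_rng(0)
block = lambda: 0.8 + 0.2 * rng.random((5, 5))
W = np.block([[block(), 0.05 * rng.random((5, 5))],
              [0.05 * rng.random((5, 5)), block()]])
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
print(fiedler_partition(W))
```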
We propose a notation for tensors with named axes, which relieves the author, reader, and future implementers of machine learning models from the burden of keeping track of the order of axes and the purpose of each. The notation makes it easy to lift operations on low-order tensors to higher order ones, for example, from images to minibatches of images, or from an attention mechanism to multiple attention heads. After a brief overview and formal definition of the notation, we illustrate it through several examples from modern machine learning, from building blocks like attention and convolution to full models like Transformers and LeNet. We then discuss differential calculus in our notation and compare with some alternative notations. Our proposals build on ideas from many previous papers and software libraries. We hope that our notation will encourage more authors to use named tensors, resulting in clearer papers and more precise implementations.
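The benefit of naming axes can be seen in existing software with named-axis support; the sketch below uses xarray as one such library (this is an illustrative analogue of the idea, not the notation proposed in the paper). Reductions and broadcasting refer to axes by name rather than by position, so lifting an operation from a single image to a minibatch requires no code changes.

```python
import numpy as np
import xarray as xr

# A minibatch of "images" with named axes instead of positional ones.
images = xr.DataArray(
    np.random.rand(8, 32, 32, 3),
    dims=("batch", "height", "width", "channel"),
)

# Reductions name the axis they act on, not its position.
grayscale = images.mean(dim="channel")            # axes: batch, height, width

# A per-channel weight vector broadcasts by name, regardless of axis order.
weights = xr.DataArray(np.array([0.3, 0.6, 0.1]), dims=("channel",))
weighted = (images * weights).sum(dim="channel")
print(grayscale.dims, weighted.dims)
```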
We show that a variety of modern deep learning tasks exhibit a "double-descent" phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance. * Work performed in part while Preetum Nakkiran was interning at OpenAI, with Ilya Sutskever. We especially thank Mikhail Belkin and Christopher Olah for helpful discussions throughout this work.